Search for: All records
Total Resources: 4
Filter by Author / Creator
- Cen, Zhepeng (4)
- Zhao, Ding (4)
- Liu, Zuxin (3)
- Li, Bo (2)
- Yao, Yihang (2)
- Arief, Mansur (1)
- Huang, Peide (1)
- Huang, Zhiyuan (1)
- Isenbaev, Vladislav (1)
- Lam, Henry (1)
- Liu, Wei (1)
- Liu, Zhenyuan (1)
- Wu, Steven (1)
- Yu, Wenhao (1)
- Zhang, Tingnan (1)
- Free, publicly-accessible full text available December 10, 2025
- Cen, Zhepeng; Yao, Yihang; Liu, Zuxin; Zhao, Ding (Proceedings of the 41st International Conference on Machine Learning)
- Arief, Mansur; Cen, Zhepeng; Liu, Zhenyuan; Huang, Zhiyuan; Li, Bo; Lam, Henry; Zhao, Ding (2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), Macau)
- Liu, Zuxin; Cen, Zhepeng; Isenbaev, Vladislav; Liu, Wei; Wu, Steven; Li, Bo; Zhao, Ding (Proceedings of the 39th International Conference on Machine Learning)
  Safe reinforcement learning (RL) aims to learn policies that satisfy certain constraints before deploying them to safety-critical applications. Previous primal-dual style approaches suffer from instability issues and lack optimality guarantees. This paper overcomes these issues from the perspective of probabilistic inference. It introduces a novel Expectation-Maximization approach that naturally incorporates constraints during policy learning: 1) a provably optimal non-parametric variational distribution can be computed in closed form after a convex optimization (E-step); 2) the policy parameters are improved within a trust region based on the optimal variational distribution (M-step). The proposed algorithm decomposes the safe RL problem into a convex optimization phase and a supervised learning phase, which yields more stable training performance. A wide range of experiments on continuous robotic tasks shows that the proposed method achieves significantly better constraint satisfaction and sample efficiency than baselines. The code is available at https://github.com/liuzuxin/cvpo-safe-rl.
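The E-step/M-step decomposition described in the abstract above can be sketched as follows, under strong simplifying assumptions that are mine, not the paper's: a discrete set of candidate actions, a single cost constraint, a 1-D Gaussian policy, and a grid search over the dual variable standing in for the paper's convex optimization.

```python
import numpy as np

def e_step(q_reward, q_cost, cost_limit, eta=1.0, lam_grid=None):
    # Toy E-step (hypothetical simplification): pick a dual variable lam
    # so that the non-parametric variational distribution
    #     w_i ∝ exp((Qr_i - lam * Qc_i) / eta)
    # satisfies the expected-cost constraint. A grid search over lam
    # stands in for the paper's convex optimization.
    if lam_grid is None:
        lam_grid = np.linspace(0.0, 10.0, 101)
    for lam in lam_grid:
        logits = (q_reward - lam * q_cost) / eta
        w = np.exp(logits - logits.max())   # numerically stable softmax
        w /= w.sum()
        if float(w @ q_cost) <= cost_limit:  # constraint satisfied
            return w, float(lam)
    return w, float(lam)  # fall back to the most conservative lam tried

def m_step(actions, weights, old_mean, lr=0.5):
    # Toy M-step: supervised, weighted update of the policy mean toward
    # the variational distribution; the step-size cap loosely plays the
    # role of the trust region.
    target = float(weights @ actions)
    return old_mean + lr * (target - old_mean)
```

Here `q_reward` and `q_cost` are per-action value estimates; in the actual method these would come from learned critics, and the E-step is a proper convex program rather than a grid search.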